
    The Persuasive Effect of Privacy Recommendations

    Several researchers have recently suggested that, in order to avoid privacy problems, location-sharing services should provide finer-grained methods of location sharing. This may, however, turn each “check-in” into a rather complex decision that puts an unnecessary burden on the user. We present two studies that explore ways to help users with such location-sharing decisions. Study 1 shows that users’ evaluation of their activity is a good predictor of the sharing action they choose. Study 2 develops several “privacy recommenders” that tailor the list of sharing actions to this activity evaluation. We find that these recommenders have a strong persuasive effect, and that users find short lists of recommended actions helpful. We also find, however, that users ultimately find it more satisfying if we do not ask them to evaluate the activity.
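
    As a concrete illustration of the approach, the following sketch shows how a recommender could tailor a short list of sharing actions to the user's activity evaluation. The action names, the 1-to-5 evaluation scale, and the distance-based ranking rule are all illustrative assumptions, not the actual recommenders developed in Study 2.

```python
# Hypothetical sketch of a "privacy recommender": rank a fixed list of
# sharing actions by how well they match the user's evaluation of their
# current activity. All names and rules here are illustrative assumptions.

# Sharing actions ordered from least to most revealing.
SHARING_ACTIONS = ["share nothing", "share city only",
                   "share exact location", "share location and activity"]

def recommend_actions(activity_evaluation: int, top_k: int = 2) -> list[str]:
    """Return a short list of sharing actions tailored to how positively
    the user evaluates their current activity (1 = very negative,
    5 = very positive)."""
    if not 1 <= activity_evaluation <= 5:
        raise ValueError("activity_evaluation must be between 1 and 5")
    # Map the evaluation onto a preferred level of disclosure.
    preferred = round((activity_evaluation - 1) / 4 * (len(SHARING_ACTIONS) - 1))
    # Rank actions by their distance from that preferred level.
    ranked = sorted(range(len(SHARING_ACTIONS)), key=lambda i: abs(i - preferred))
    return [SHARING_ACTIONS[i] for i in ranked[:top_k]]

print(recommend_actions(5))  # most revealing actions ranked first
print(recommend_actions(1))  # least revealing actions ranked first
```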

    Inferring Capabilities of Intelligent Agents

    We investigate the usability of human-like agent-based interfaces. In an experiment we manipulate the capabilities and the “human-likeness” of a travel advisory agent. We show that users of the more human-like agent form an anthropomorphic use image of the system: they act as if the system is human, and try to exploit typical human-like capabilities. Unfortunately, this severely reduces the usability of the agent that looks human but lacks human-like capabilities (the overestimation effect). We also show that the use image users form of agent-based systems is inherently integrated (as opposed to the compositional use image they form of conventional GUIs): cues provided by the system do not instill user responses in a one-to-one manner, but are instead integrated into a single use image. Consequently, users try to exploit capabilities that were not signaled by the system to begin with, thereby further exacerbating the overestimation effect.

    Counteracting the Negative Effect of Form Auto-completion on the Privacy Calculus

    When filling out web forms, people typically do not want to submit every piece of requested information to every website. Instead, they selectively disclose information after weighing the potential benefits and risks of disclosure: a process called the “privacy calculus”. Giving users control over what to enter is a prerequisite for this selective disclosure behavior. Exercising this control by manually filling out a form is a burden, though. Modern browsers therefore offer an auto-completion feature that automatically fills out forms with previously stored values. This feature is convenient, but it makes it so easy to submit a fully completed form that users seem to skip the privacy calculus altogether. In an experiment we compare this traditional auto-completion tool with two alternative tools that give users more control. While users of the traditional tool indeed forgo their selective disclosure behavior, the alternative tools effectively reinstate the privacy calculus.
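
    As a rough sketch of this design space, the code below contrasts a traditional fill-everything auto-completion flow with an alternative that routes every field through an explicit disclosure decision. The field names, stored profile, and callback-based approval are invented for illustration; the tools compared in the paper are browser interfaces, not this API.

```python
# Hypothetical sketch contrasting two auto-completion designs.
# The stored profile and field names are illustrative assumptions.

STORED_PROFILE = {"name": "Alice", "email": "alice@example.com",
                  "phone": "555-0100", "address": "12 Main St"}

def traditional_autofill(form_fields):
    """Fill every requested field at once: convenient, but the user can
    submit a fully completed form without weighing benefits against risks."""
    return {f: STORED_PROFILE.get(f, "") for f in form_fields}

def per_field_autofill(form_fields, disclose):
    """Prefill only the fields the user explicitly approves, so each piece
    of information still passes through a disclosure decision.
    `disclose` stands in for a per-field control (e.g., a checkbox)."""
    return {f: STORED_PROFILE[f] for f in form_fields
            if f in STORED_PROFILE and disclose(f)}

fields = ["name", "email", "phone", "address"]
print(traditional_autofill(fields))             # everything disclosed at once
choices = {"name": True, "email": True, "phone": False, "address": False}
print(per_field_autofill(fields, choices.get))  # only name and email disclosed
```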

    Exacerbating Mindless Compliance: The Danger of Justifications during Privacy Decision Making in the Context of Facebook Applications

    Online companies exploit mindless compliance during users’ privacy decision making to avoid liability while not impairing users’ willingness to use their services. These manipulations can play against users, since they subversively influence decisions by nudging people to mindlessly comply with disclosure requests rather than enabling them to make deliberate choices. In this paper, we demonstrate the compliance-inducing effects of defaults and framing in the context of a Facebook application that nudges people to be automatically publicly tagged in their friends’ photos and/or to tag their friends in their own photos. By studying these effects in a real Facebook application, we overcome a common criticism of privacy research, which often relies on hypothetical scenarios. Our results concur with previous findings on framing and default effects. Specifically, we found a reduction in privacy-preserving behaviors (i.e., a higher tagging rate in our case) in positively framed and accept-by-default decision scenarios. Moreover, we tested the effect that two types of justifications, information that implies what other people do (normative) or what the user ought to do (rationale-based), have on framing- and default-induced compliance. Existing work suggests that justifications may increase compliance in a positive (agree-by-default) scenario even when the justification does not relate to the decision. In this study, we expand this finding and show that even a justification that is opposite to the default action (e.g., a justification suggesting that one should not use the application) can increase mindless compliance with the default. Thus, when companies abide by policy makers’ requirements to obtain informed user consent by explaining the privacy settings, they will paradoxically induce mindless compliance and further threaten user privacy.

    Reducing Default and Framing Effects in Privacy Decision-Making

    Framing and default effects have been studied for more than a decade in different disciplines. A common criticism of these studies is that they use hypothetical scenarios. In this study, we developed a real decision environment: a Facebook application in which users had to decide whether or not they wanted to be automatically publicly tagged in their friends’ pictures and/or tag their friends in their own pictures. To ensure ecological validity, participants had to log in to their Facebook account. Our results confirmed previous studies, indicating a higher tagging rate in positively framed and accept-by-default conditions. Furthermore, we introduced a manipulation that we assumed would overshadow and thereby reduce the effects of default and framing: a justification highlighting a positive or negative descriptive social norm, or giving a rationale for or against tagging. We found that such justifications may at times increase tagging rates.
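
    To give a concrete picture of such a design, the sketch below assembles a framing x default x justification manipulation into a set of experimental conditions. All wording and the exact number of justification levels are invented for illustration; they are not the study's actual materials.

```python
# Hypothetical sketch of a framing x default x justification design.
# Every string below is invented; only the structure mirrors the abstract.
from dataclasses import dataclass
from itertools import product

FRAMING = {
    "positive": "Would you like to be tagged in your friends' photos?",
    "negative": "Would you like to NOT be tagged in your friends' photos?",
}
DEFAULT = {"accept": True, "reject": False}  # pre-selected answer
JUSTIFICATION = {
    "none": "",
    "norm_positive": "Most users of this app allow tagging.",
    "norm_negative": "Most users of this app do not allow tagging.",
    "rationale_for": "Tagging makes it easier to find photos of yourself.",
    "rationale_against": "Tagging may expose photos you would rather keep private.",
}

@dataclass
class Condition:
    question: str       # framing manipulation
    preselected: bool   # default manipulation
    justification: str  # justification manipulation

conditions = [Condition(FRAMING[f], DEFAULT[d], JUSTIFICATION[j])
              for f, d, j in product(FRAMING, DEFAULT, JUSTIFICATION)]
print(len(conditions))  # 20 cells in this illustrative between-subjects design
```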

    Explaining recommendations in an interactive hybrid social recommender

    Hybrid social recommender systems use social relevance from multiple sources to recommend relevant items or people to users. To make hybrid recommendations more transparent and controllable, several researchers have explored interactive hybrid recommender interfaces, which allow for a user-driven fusion of recommendation sources. In this line of work, the intelligent user interface has been investigated as an approach to increase transparency and improve the user experience. In this paper, we attempt to further promote the transparency of recommendations by augmenting an interactive hybrid recommender interface with several types of explanations. We evaluate user behavior patterns and subjective feedback in a within-subjects study (N=33). Results from the evaluation show the effectiveness of the proposed explanation models. The results of the post-treatment survey indicate a significant improvement in the perception of explainability, but this improvement comes with a lower degree of perceived controllability.
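
    As a minimal sketch of what user-driven fusion can look like, the code below blends per-source relevance scores using weights the user sets (e.g., via sliders). The source names, scores, and linear weighting rule are assumptions for illustration, not the interface evaluated in the paper.

```python
# Hypothetical sketch of user-driven fusion in a hybrid recommender:
# user-controlled weights blend relevance scores from several sources.

def fuse(source_scores: dict[str, dict[str, float]],
         weights: dict[str, float]) -> list[tuple[str, float]]:
    """Combine per-source item scores into one ranking, weighted by the
    user's (normalized) slider settings."""
    total = sum(weights.values()) or 1.0
    items = {item for scores in source_scores.values() for item in scores}
    fused = {item: sum(weights.get(src, 0.0) / total * scores.get(item, 0.0)
                       for src, scores in source_scores.items())
             for item in items}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

sources = {
    "social_ties":   {"paper_a": 0.9, "paper_b": 0.2},
    "content_match": {"paper_a": 0.3, "paper_b": 0.8},
}
# The user turns the "social ties" slider up and "content match" down.
print(fuse(sources, {"social_ties": 0.8, "content_match": 0.2}))
# paper_a ranks first (~0.78), paper_b second (~0.32)
```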

    The Pursuit of Transparency and Control: A Classification of Ad Explanations in Social Media

    Online advertising on social media platforms has been at the center of recent controversies over growing concerns regarding users’ privacy, dishonest data collection, and a lack of transparency and control. Facing public pressure, some social media platforms have opted to implement explanatory tools in an effort to empower consumers and shed light on marketing practices. Yet, to date, research shows significant inconsistencies in how ads should be explained. To address this issue, we conduct a systematic literature review on ad explanations, covering existing research on how they are generated, presented, and perceived by users. Based on this review, we present a classification scheme of ad explanations that offers insights into the reasoning behind the ad recommendation, the objective of the explanation, the content of the explanation, and how this content should be presented. Moreover, we identify challenges that are unaddressed by either current research or explanatory tools deployed in practice, and we discuss avenues for future research to address these challenges. This paper calls attention to, and helps to solidify, an agenda for interdisciplinary communities to collaboratively approach the design and implementation of explanations for online ads in social media.
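
    To make the scheme's structure concrete, here is a minimal encoding of its four dimensions as a data structure. Only the dimension names (reasoning, objective, content, presentation) come from the abstract; every value below is a hypothetical example.

```python
# Hypothetical encoding of the four-dimension classification scheme.
from dataclasses import dataclass

@dataclass
class AdExplanation:
    reasoning: str     # why the ad was recommended (e.g., inferred interests)
    objective: str     # what the explanation aims to achieve (e.g., transparency)
    content: str       # what the explanation tells the user
    presentation: str  # how the content is shown to the user

example = AdExplanation(
    reasoning="targeted on inferred interests",
    objective="transparency",
    content="lists the profile attributes used for targeting",
    presentation="expandable panel next to the ad",
)
print(example)
```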